28 research outputs found

    Behavior and event detection for annotation and surveillance

    Visual surveillance and activity analysis is an active research field of computer vision, and many different algorithms have been produced for this purpose. To obtain more robust systems, it is desirable to integrate these algorithms. To this end, the paper presents results on automatic event detection in surveillance videos and a distributed application framework supporting these methods. Results on motion analysis for static and moving cameras, automatic fight detection, shadow segmentation, discovery of unusual motion patterns, and indexing and retrieval are presented. These applications run in real time and are suitable for real-life deployment.

    Detection of unusual optical flow patterns by multilevel hidden Markov models

    No full text
    The analysis of motion information is one of the main tools for the understanding of complex behaviors in video. However, due to the limited quality of optical flow from low-cost surveillance camera systems and the complexity of motion, new robust image-processing methods are required to generate reliable higher-level information. In our novel approach there is no need for tracking objects (vehicles, pedestrians) in order to recognize anomalous motion; instead, dense optical flow information is used to construct mixtures of Gaussians, which are analyzed temporally. We create a multilevel model, where low-level states of non-overlapping image regions are modeled by continuous hidden Markov models (HMMs). From the low-level HMMs we compose high-level HMMs to analyze the occurrence of the low-level states. Processing large amounts of data in traditional HMMs can cause a precision problem due to the multiplication of low probability values. Thus, besides introducing new motion models, we incorporate a scaling technique into the mathematical model of HMMs to avoid precision problems and to obtain an effective tool for the analysis of large numbers of motion vectors. We illustrate the use of our models with real-life traffic videos.
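    The scaling technique mentioned above is, in essence, the standard scaled forward recursion for HMMs. The sketch below shows that idea on a toy model; the state count, emission symbols, and all probability values are invented for illustration and are not taken from the paper.

```python
import numpy as np

def scaled_forward(pi, A, B, obs):
    """Log-likelihood of an observation sequence under an HMM,
    computed with the scaled forward recursion.

    pi  : (N,)   initial state probabilities
    A   : (N, N) transitions, A[i, j] = P(state j | state i)
    B   : (N, M) emissions,   B[i, k] = P(symbol k | state i)
    obs : sequence of observation symbol indices

    Normalizing each forward vector to sum to one and accumulating
    the log of the normalizers avoids the underflow that plain
    products of low probabilities suffer on long sequences.
    """
    alpha = pi * B[:, obs[0]]
    c = alpha.sum()
    alpha = alpha / c
    log_lik = np.log(c)
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
        c = alpha.sum()
        alpha = alpha / c
        log_lik += np.log(c)
    return log_lik

# Two hidden states, two observation symbols -- values are made up.
pi = np.array([0.6, 0.4])
A = np.array([[0.7, 0.3], [0.4, 0.6]])
B = np.array([[0.9, 0.1], [0.2, 0.8]])
print(scaled_forward(pi, A, B, [0, 1] * 500))  # finite even for 1000 steps
```

    In the multilevel setting, sequences of decoded low-level region states would in turn serve as the observation symbols of a higher-level HMM built the same way.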

    Analysis of time-multiplexed security videos

    No full text

    HMM-based unusual motion detection without tracking

    No full text

    About the Application of Autoencoders For Visual Defect Detection

    Visual defect detection is a key technology in modern industrial manufacturing systems. Product defects can take many possible forms, including distortions in color or shape, contamination, and missing or superfluous parts. For their detection, besides traditional image processing techniques, methods based on convolutional neural networks have also appeared, avoiding hand-crafted features and enabling more efficient detection mechanisms. In our article we deal with autoencoder convolutional networks (AEs), which do not require examples of defects for training. Unfortunately, manual and/or trial-and-error design of AEs is still required to achieve good performance, since AEs have many unknown parameters that can greatly influence their detection abilities. For our study we have chosen a well-performing AE known as the structural similarity AE (SSIM-AE), where the loss function and the comparison of the output with the input are implemented via SSIM instead of the often-used L1 or L2 norms. Investigating the performance of SSIM-AE on different datasets, we found that it can be improved with modified convolutional structures without changing the size of the latent space. We also show that finding a model with a low reconstruction error during training does not imply good detection abilities, and that denoising AEs can increase efficiency.
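    As a rough illustration of the SSIM comparison the abstract refers to, the sketch below computes a global (single-window) SSIM score in NumPy. This is a simplified stand-in: the actual SSIM-AE compares reconstructions with a windowed SSIM over local patches; the constants follow the common k1/k2 defaults.

```python
import numpy as np

def ssim_global(x, y, data_range=1.0, k1=0.01, k2=0.03):
    """Global (single-window) structural similarity of two images.

    Combines luminance, contrast and structure terms; it equals 1.0
    only when the two images are identical, so 1 - SSIM between an
    input and its autoencoder reconstruction can flag defects.
    """
    c1 = (k1 * data_range) ** 2
    c2 = (k2 * data_range) ** 2
    mu_x, mu_y = x.mean(), y.mean()
    var_x, var_y = x.var(), y.var()
    cov_xy = ((x - mu_x) * (y - mu_y)).mean()
    return ((2 * mu_x * mu_y + c1) * (2 * cov_xy + c2)) / (
        (mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2)
    )

# A reconstruction that differs from the input scores below 1.0.
rng = np.random.default_rng(0)
img = rng.random((32, 32))
damaged = img.copy()
damaged[8:16, 8:16] = 1.0  # simulated defect region
print(ssim_global(img, img), ssim_global(img, damaged))
```

    In practice a windowed SSIM yields a per-pixel similarity map, which is what makes localizing the defect (rather than just scoring the whole image) possible.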

    Lightweight Active Object Retrieval with Weak Classifiers

    No full text
    In the last few years, there has been steadily growing interest in autonomous vehicles and robotic systems. While many of these agents are expected to have limited resources, they should be able to interact dynamically with other objects in their environment. We present an approach in which lightweight sensing and processing techniques, requiring very limited memory and processing power, are successfully applied to the task of object retrieval using sensors of different modalities. We use the Hough framework to fuse optical and orientation information from the different views of the objects. In the presented spatio-temporal perception technique we apply active vision: based on the analysis of initial measurements, the direction of the next view is chosen to increase the retrieval hit rate. The performance of the proposed methods is demonstrated on three datasets with heavy noise.
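    A minimal sketch of the Hough-style evidence accumulation across views described above, with a toy active-vision step. The object ids, vote weights, and the per-view vote model are invented for illustration; the paper's actual votes come from weak classifiers over optical and orientation features.

```python
import numpy as np

def hough_retrieve(votes_per_view, n_objects):
    """Accumulate weak per-view votes in a Hough-style accumulator
    and return the strongest object hypothesis.

    votes_per_view : list over views of (object_id, weight) votes
    """
    acc = np.zeros(n_objects)
    for votes in votes_per_view:
        for obj, w in votes:
            acc[obj] += w
    return int(acc.argmax()), acc

def choose_next_view(acc, expected_votes):
    """Toy active-vision step: pick the viewing direction whose
    expected votes best separate the two strongest hypotheses.

    expected_votes : (n_views, n_objects) expected vote mass
    """
    second, best = np.argsort(acc)[-2:]
    sep = np.abs(expected_votes[:, best] - expected_votes[:, second])
    return int(sep.argmax())

# No single view is confident about object 2, but accumulated
# evidence from three views retrieves it.
votes = [
    [(0, 0.40), (2, 0.35)],
    [(1, 0.40), (2, 0.35)],
    [(2, 0.40), (0, 0.30)],
]
best, acc = hough_retrieve(votes, n_objects=3)
print(best, acc)  # object 2 wins
```

    The accumulator is what makes the weak classifiers usable: each vote is cheap and unreliable on its own, and only their fused total over views is trusted.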